Appendix of RECESS

A Additional Related Works

A.1 Federated Learning

FedAvg. FedAvg [

Neural Information Processing Systems

The aggregated gradient is a weighted average of each client's uploaded gradient, where each client's weight is determined by the number of its local training samples. However, the aggregated gradient, i.e., the global model, is vulnerable to poisoning attacks. From the perspective of the attacker's goal, poisoning attacks are categorized as targeted and untargeted attacks. Note that Mkrum reduces to Krum when m = 1 and to FedAvg when m = n. FLTrust has the server hold a small root dataset and participate in each iteration to generate a gradient benchmark; as a consequence, FLTrust may discard benign outliers. Under these schemes, all clients simply follow normal FL training without any extra rules to obey.
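The sample-weighted averaging described above can be sketched in a few lines of NumPy. This is a minimal illustration of FedAvg-style aggregation, not code from either paper; the function name `fedavg_aggregate` is hypothetical.

```python
import numpy as np

def fedavg_aggregate(gradients, sample_counts):
    """FedAvg-style aggregation: a weighted average of client gradients,
    with each client's weight proportional to its number of local
    training samples (illustrative sketch, not the papers' code)."""
    stacked = np.stack([np.asarray(g, dtype=float) for g in gradients])
    return np.average(stacked, axis=0, weights=sample_counts)

# Toy example with three clients holding 10, 30, and 60 samples:
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
agg = fedavg_aggregate(grads, sample_counts=[10, 30, 60])
# agg == [1.3, 1.5]: the client with 60 samples dominates the average.
```

A single poisoned gradient enters this average with weight proportional to its claimed sample count, which is why robust rules such as Krum or Mkrum replace the plain weighted mean.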



Sageflow: Robust Federated Learning against Both Stragglers and Adversaries (Supplementary Material)

Neural Information Processing Systems

The hyperparameter settings for Sageflow are shown in Tables 1 and 2. For the backdoor attack, the hyperparameter details are shown in Table 4 (Sageflow with both stragglers and adversaries; columns: Dataset, γ, λ, δ, E). We specify these values in Table 5. The local batch size is set to 64. Figure 1 shows the performance under the non-scaled backdoor attack, both with only adversaries (no stragglers) and with both stragglers and adversaries. Some additional experiments were conducted under model poisoning with a scale factor of 10. The loss associated with a poisoned device increases as the scale factor grows from 0.1 to 10. Not only Sageflow but also Zeno+ can effectively defend against the attacks with only adversaries.